
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network


Abstract

Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
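The abstract's key idea is a perceptual loss that combines a content term (feature-space similarity rather than pixel-space similarity) with an adversarial term from the discriminator. A minimal NumPy sketch of how these two terms combine is shown below; the feature arrays stand in for deep network activations (SRGAN uses VGG features), and the `adv_weight` default of 1e-3 is an assumption about the relative weighting, not taken from this abstract.

```python
import numpy as np

def content_loss(feat_sr, feat_hr):
    # MSE between feature maps. In SRGAN this is computed on deep
    # (VGG) activations rather than raw pixels; here the feature
    # arrays are simple stand-ins for those activations.
    return np.mean((feat_sr - feat_hr) ** 2)

def adversarial_loss(d_sr):
    # -log D(G(LR)): small when the discriminator scores the
    # super-resolved image as "real" (close to 1), large when it
    # confidently flags it as generated (close to 0).
    eps = 1e-12  # guard against log(0)
    return -np.log(d_sr + eps)

def perceptual_loss(feat_sr, feat_hr, d_sr, adv_weight=1e-3):
    # Perceptual loss = content loss + weighted adversarial loss.
    # adv_weight=1e-3 is an assumed relative weighting for this sketch.
    return content_loss(feat_sr, feat_hr) + adv_weight * adversarial_loss(d_sr)
```

The adversarial term is what pushes solutions toward the natural image manifold: for a fixed content loss, a super-resolved image that fools the discriminator (higher `d_sr`) incurs a strictly lower total loss.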
